Notes – Iles Neuro, sound localisation

Greg Detre

Sunday, 04 November, 2001

Dr Iles, week 5

 

Essay title

How does the auditory system perform sound localisation? (made-up)

Possible alternatives

Assess the relative contributions of different parts of the auditory system in sound localisation.

How well can we understand the mechanisms of sound localization by studying the physiology of neurones at specific locations in the auditory pathways?

Develop a research strategy for investigating how human observers might track a moving auditory target.

How relevant are studies of auditory localisation mechanisms in birds to understanding auditory localisation in man?

With reference to experimental studies, explain how a person who is deaf in one ear might localise sounds in space.

Planned reading

Konishi, M. (1993), 'Listening with Two Ears', Scientific American, vol. 268, No. 4.

Notes – Kandel and Schwartz (1991), ch 32, 'Hearing', pg 481

Ohm proposed that the ear performs a type of Fourier spectral analysis – complex waveforms are simplified into the sum of many individual sine waves and cosine waves of appropriate frequencies, phases and amplitudes
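A minimal sketch of Ohm's idea (all frequencies and amplitudes here are illustrative choices, not values from the chapter): a waveform built from two sinusoids is pulled back apart by numpy's FFT.

```python
import numpy as np

# Build a 'complex' waveform from two sine components, then recover
# the components by Fourier analysis. All values are illustrative.
fs = 8000                          # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)      # one second of samples
wave = 1.0 * np.sin(2 * np.pi * 261 * t) + 0.5 * np.sin(2 * np.pi * 523 * t)

amplitudes = np.abs(np.fft.rfft(wave)) / (len(t) / 2)   # amplitude spectrum
freqs = np.fft.rfftfreq(len(t), 1 / fs)

print(freqs[amplitudes > 0.1])     # -> [261. 523.]  (the two components)
```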

three parts to the ear – outer, middle and inner ear

cochlea of the inner ear – a spiral bony canal filled with fluid that contains the sensory transduction apparatus, the organ of Corti

vestibular apparatus of the inner ear – important for maintaining body posture and integrating head/eye movements

sound is produced by vibrations – alternating compressions/rarefactions (increased/decreased pressure) of the surrounding air, which radiate outward from the source as a pressure wave

pitch = frequency (the number of peaks that pass a given point per unit time)

middle C = 261Hz, human sensitivity 20-20,000Hz

amplitude = the maximum change in air pressure in either direction; perceived as loudness

sound pressure level (in decibels) = 20 log10(Pt/Pr)

where:

Pt = test pressure (in newtons, N, per square metre)

Pr = reference pressure (20 µN/m²)

this scale was devised by Bell, who found that the Weber-Fechner law applies to hearing

incremental increases in subjective loudness correspond to equal increments of sound pressure level (SPL) regardless of the absolute value of the sound pressure level

the ear has a range of about 120dB, which corresponds to about 6 orders of magnitude of pressure
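A quick worked check of the decibel formula above, using the 20 µN/m² reference pressure from the notes (a minimal sketch, not anything from the chapter itself):

```python
import math

P_REF = 20e-6   # reference pressure Pr: 20 µN/m² (= 20 µPa)

def spl_db(p_test: float) -> float:
    """Sound pressure level in dB for a test pressure Pt in N/m²."""
    return 20 * math.log10(p_test / P_REF)

print(spl_db(20e-6))   # 0 dB: the reference itself
print(spl_db(20.0))    # 120 dB: 10**6 times the reference pressure,
                       # i.e. the ear's ~120 dB range spans 6 orders
                       # of magnitude of pressure
```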

"Sounds reaching the ear travel through the external ear canal, or external auditory meatus (Latin, opening) and reach the middle ear, causing the tympanic (Latin, drum) membrane to vibrate. This vibration is then conveyed through the middle ear by a series of three small bones (ossicles), one of which, the malleus (Latin, hammer), is attached to the tympanic membrane. The vibration of the malleus is transmitted to an opening in the cochlea, the oval window, by the other two ossicles, the incus (Latin, anvil) and the stapes (Latin, stirrup). The major components of the middle ear (tympanic membrane and ossicles) ensure that sounds from the air in the outer ear are transmitted efficiently to the fluid-filled cochlea of the inner ear. Otherwise, sounds would reach the fluid at the oval window directly, and most of the sound energy would be reflected because fluid has a higher acoustic impedance than air, and so the sound pressure required for hearing would be higher." Because the area of the tympanic membrane is greater than the area of the oval window, the total pressure (force per unit area) acting on the smaller oval window is increased.
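A hedged worked example of that area-ratio amplification; the areas below are typical textbook figures I've assumed, not values given in these notes:

```python
import math

# Force collected over the large tympanic membrane is delivered to the
# much smaller oval window; the same force over a smaller area means
# higher pressure. Areas are typical illustrative values, not from the
# notes themselves.
A_TYMPANIC = 55e-6     # effective tympanic membrane area, m² (~55 mm²)
A_OVAL = 3.2e-6        # oval window area, m² (~3.2 mm²)

gain = A_TYMPANIC / A_OVAL
print(f"~{gain:.0f}x pressure gain (~{20 * math.log10(gain):.0f} dB)")
# -> ~17x pressure gain (~25 dB)
```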

"The cochlea spirals for two-and-a-half turns around a central pillar called the modiolus (Latin, pillar or hub). The cochlea has three fluid-filled compartments or scalae (Italian, stairway): the scala tympani, which follows the outer contours of the cochlea; the scala vestibuli, which follows the inner contours and is continuous with the scala tympani at the helicotrema (Greek, spiral hole); and between these two, the scala media (or cochlear duct), which extends finger-like into the cochlear channel and ends blindly near the apical end of the cochlea.

"Sound entering the ear causes the stapes to oscillate, and these oscillations transmit energy to each of the three compartments. Because fluid is not compressible, the pressure being put on the fluid in the scala vestibuli causes an alternating outward and inward movement of the round window membrane of the scala tympani, as well as oscillating movements of the scala media and of the basilar membrane (the floor of the scala media). The organ of Corti, the sensory transduction apparatus in the scala media, rests on the basilar membrane and is also stimulated by this movement. Thus, the cochlear compartments convert the differential pressure between the scala vestibuli and scala tympani into oscillating movements of the basilar membrane that excite and inhibit the sensory transducing cells in the organ of Corti."

bone conduction = when sounds bypass the middle ear to reach the cochlea directly by vibration of the entire temporal bone

but this is inefficient and important only for audiological diagnosis (Rinne compared hearing-impaired patients' ability to detect air- vs bone-conducted sounds using a tuning fork in contact with the mastoid process of the temporal bone behind the ear, highlighting disruption of the air-conduction apparatus)

thus distinguish 2 broad classes of deafness

conductive deafness – caused by damage to the sound-conducting apparatus of the outer or middle ear (often surgically repairable)

sensorineural deafness – caused by damage to the cochlea, the eighth nerve or the central auditory pathway

the organ of Corti contains the (inner and outer) hair cells, the sensory receptor cells of the inner ear, as well as a variety of supporting cells

particular sets of hair cells are moved by motion initiated in a particular portion of the basilar membrane

there are three rows of outer hair cells and one row of inner hair cells (inner/outer relative to the modiolus, the central pillar)

each hair cell has a bundle of stereocilia on its apical surface – they are stiff because they are filled with parallel arrays of cross-bridge actin filaments

these stereocilia project into (and are embedded in) the overlying tectorial membrane – if the tectorial membrane and the basilar membrane (in which the bodies of the hair cells rest) move with respect to one another, the stereocilia will be displaced/bent

this causes a change in ionic conductance at the apical surface of the cell, a current flow and a voltage change – motion of the stereocilia hyper-/depolarises the cell by opening ion channels that produce an inward current – thus the oscillatory movement of the basilar membrane results in sinusoidal (depolarising-hyperpolarising) potential changes at the frequency of the sound

the hair cell releases chemical transmitter at the basal end, contacting the peripheral branches of axons of bipolar neurons whose cell bodies lie in the spiral ganglion and whose central axons constitute the auditory nerve

how are different frequencies of sound neurally encoded? Helmholtz noticed two interesting features:

the basilar membrane has cross striations (like the strings of a piano)

the basilar membrane varies in width from the base to the apex of the cochlea – narrow and stiff near the oval window and wide and flexible near the apex of the cochlea

Helmholtz's resonance theory – the cross striations resonate with different frequencies of sound (like piano strings, which are of different lengths/stiffnesses)

von Bekesy tested this and found that each sound does not lead to the resonance of only one narrow segment of the basilar membrane, but initiates a travelling wave along the length of the cochlea that starts at the oval window (like snapping a rope tied at one end to a post)

different frequencies of sound produce different travelling waves with peak amplitudes at different points along the basilar membrane

at low frequencies, the peak amplitude of the motion is near the apex of the cochlea (in the region of the helicotrema)

as the frequency of the stimulus increases, the peak amplitude of motion occurs closer to the base of the cochlea

higher amplitudes cause the peak vibration to increase in displacement and a broader region of the membrane to vibrate

the representation of frequencies along the basilar membrane is logarithmic
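This logarithmic frequency-place map is often modelled with Greenwood's function; the sketch below uses Greenwood's (1990) human constants as an illustration, not a formula from the chapter:

```python
def greenwood_freq(x: float) -> float:
    """Characteristic frequency (Hz) at fractional distance x along the
    basilar membrane, from apex (x = 0) to base (x = 1).

    Uses Greenwood's (1990) human constants - an illustrative model of
    the logarithmic frequency-place map, not taken from these notes."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f}: {greenwood_freq(x):7.0f} Hz")
# runs from ~20 Hz at the apex up to ~20 kHz at the base, with equal
# steps along the membrane covering roughly equal numbers of octaves
```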

the hair cells situated at the site where the oscillation is maximal are the most excited (occurring at the points predicted by Helmholtz's resonance theory)

the hair cells within the organ of Corti differ from one another in their electromechanical properties – this may be very important in determining frequency selectivity

mechanical resonance – the hair cells themselves are differently tuned

at the base of the cochlea (narrow + stiff basilar membrane) – the outer hair cells and their stereocilia are short + stiff

at the apex (more flexible basilar membrane) – hair cells + stereocilia are much longer and more flexible

electrical resonance – the hair cell membrane shows spontaneous voltage fluctuations/oscillations in membrane potential around the resting potential

the characteristic frequency of the spontaneous electrical oscillation in each cell matches the frequency at which the cell is most responsive to mechanical stimuli

the interaction of the mechanical (the physical properties of the hair cell and its stereocilia) and electrical (the electrical membrane characteristics of the cell) resonances of the cell tune the hair cell to a particular frequency (like a tuned amplifier)

also, the outer hair cells can increase or decrease the length of their cell bodies in response to transcellular alternating current stimulation, when maintained in culture (Brownell)

in reverse, the ear itself can produce sounds, termed otoacoustic emissions (Kemp)

 

the hair cells are innervated by bipolar neurons of the spiral ganglion in the modiolus of the cochlea

there are about 33,000 spiral ganglion cells in the human cochlea

approximately 90% of the fibres innervate the inner hair cells

each of the 3000 inner hair cells receives contacts from about 10 fibres, and each fibre contacts one inner hair cell

the remaining 10% innervate many outer hair cells

there are also some efferent fibres synapsing on outer hair cells (which may be related to changing their length and tuning the organ of Corti, and possibly the entire cochlea, to sounds of particular interest) and on the afferent axons innervating inner hair cells

individual auditory nerve fibres respond to a particular characteristic frequency of sound, according to their tuning curve

given a maximum firing rate of 500Hz for a neuron, afferent fibres can't use a one-to-one code

the firing-pattern response to a brief tone is not instantaneous, but builds up

most sounds that are biologically significant to humans contain both amplitude-modulated and frequency-modulated components, so there must be a mechanism to demodulate these components to recover the input signal – this requires information about both the average characteristics of the response and its time structure

the volley principle (Wever) – several fibres phase-locking (preferential firing at a particular point of the sound wave) at different cycles of a high-frequency stimulus might work in concert to signal high frequencies
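A toy sketch of the volley idea, assuming the ~500 Hz per-fibre ceiling mentioned above (all numbers illustrative):

```python
import numpy as np

# A 2 kHz tone exceeds a single fibre's ~500 Hz maximum firing rate,
# but four fibres that each phase-lock to every 4th cycle, offset from
# one another, jointly produce one population spike per stimulus cycle.
freq = 2000.0                  # stimulus frequency, Hz (illustrative)
period = 1.0 / freq
n_fibres = 4                   # each fibre fires on every 4th cycle

cycles = np.arange(40)         # 40 stimulus cycles = 20 ms of tone
spikes = [cycles[f::n_fibres] * period for f in range(n_fibres)]

duration = len(cycles) * period
print(len(spikes[0]) / duration)               # 500 spikes/s per fibre
print(len(np.concatenate(spikes)) / duration)  # 2000 spikes/s pooled
```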

the place principle – emphasises the importance of orderly connections between the auditory nerve and the brain as the basis of our ability to detect a broad range of sound frequencies, with each nerve fibre identified by the site (i.e. frequency) it innervates in the cochlea

detecting speech sounds is difficult because the vibrations of the mouth and tongue are much slower (c. 10Hz) than the vocal cords, and below the frequency sensitivity of the ear

like a frequency analyser, the auditory system looks for speech formants, spectral peaks at particular frequencies that characterise different vowel sounds – these are represented in specific temporal and spatial patterns of nerve fibre discharges

an individual fibre may have spectral components in its firing pattern at both the frequency of vocal cord vibration and at the lower modulating frequency imposed on the speech sounds by the resonances of mouth and tongue, acting like a demodulator in a radio to extract significant low-frequency information from a high-frequency carrier wave

 

auditory nerve fibres in the eighth cranial nerve terminate in the cochlear nucleus (on the external aspect of the inferior cerebellar peduncle), divided into dorsal and ventral (the latter into anteroventral and posteroventral) nuclei

tonotopic organisation – the topography of the cochlear nucleus

2 main cell types of the ventral cochlear nucleus (Oertel), when depolarised by a steady current pulse:

stellate cell – generates a spaced series of action potentials at regular intervals, called a chopper response

bushy cell – generates only one or two spikes at the beginning of the pulse, signalling the onset and timing (important for localisation)

Bilateral auditory pathways provide cues to localise sound

the localisation of sounds in space is achieved in the brain by comparison of differences in the intensity and timing of sounds received in each ear

the duration of the delay between the sound being heard by both ears depends on the distance between the two ears, the speed of sound and the location of the sound source

if the sound is along the midline, front or back, the delay would be zero

at 90° to the right or left, the interaural delay can be as long as roughly 600 µs – between these extremes, there is a spectrum of interaural time differences

for low frequency sounds (<1400Hz), a continuous tone can be localised by phase difference

at higher frequencies, where the wavelength of the sound is less than the distance between the two ears, the phase or time difference of a continuous tone becomes ambiguous, because the phases could align along multiple cycles
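A hedged sketch of the geometry behind these two cues; the head width and speed of sound are assumed typical values, not figures from these notes:

```python
import math

C = 343.0    # speed of sound in air, m/s (assumed typical value)
D = 0.175    # inter-ear distance, m (assumed typical value)

def itd_seconds(azimuth_deg: float) -> float:
    """Approximate interaural time difference for a distant source at
    the given azimuth (0 = straight ahead): path difference ~ D*sin(az)."""
    return (D / C) * math.sin(math.radians(azimuth_deg))

print(itd_seconds(0) * 1e6)     # 0 µs on the midline
print(itd_seconds(90) * 1e6)    # ~510 µs at 90 degrees off to one side

# Phase comparison of a continuous tone becomes ambiguous once a whole
# cycle fits inside the maximum ITD, i.e. above about 1/max_itd:
print(1 / itd_seconds(90))      # ~2 kHz with these values; the notes
                                # give ~1400 Hz as the practical limit
```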

at these frequencies, the head acts as a sound shield, reflecting and absorbing the shorter wavelengths of sound to produce an interaural intensity difference

Konishi used the owl's brain as a model system for analysing sound localisation. "Neurons concerned with the detection of interaural timing differences are found in the laminar nucleus, a part of the central auditory pathway in the medulla of the bird's brain. This bilateral nucleus receives fibres from the cochlear nuclei on either side and is organised tonotopically into a number of isofrequency zones, where the neurons and fibres share the same characteristic frequency. When a recording electrode is advanced through each isofrequency zone to record the response of successive fibres to a sound stimulus, orderly shifts in the time of arrival of spikes phase-locked to the sound stimulus are observed. Neurons in the isofrequency zones act as coincidence detectors by integrating inputs from fibres at the same frequency but at different interaural delays. Therefore frequency and time are mapped along orthogonal axes in the timing pathway of the brain.

"Interaural differences in sound intensity are analysed using excitatory and inhibitory interactions between inputs from the two ears in the intensity pathway of the brain. Neurons in the principal relay nucleus of this pathway are excited by contralateral stimulation and inhibited by ipsilateral stimulation of the ear. When both ears are stimulated equally, excitation and inhibition balance, and there is little response from the post-synaptic cell. When sound amplitude is decreased in one ear, neurons in the intensity pathway on the same side respond, since ipsilateral inhibition has been decreased. Neurons are arranged according to the specific interaural intensity difference to which they are most responsive. At higher levels, the timing and intensity pathways converge, where there are neurons broadly tuned in frequency but specific in spatial localisation of sound.

"In the mammalian brain, axons from the cochlear nuclei project to several brain stem auditory nuclei, so there are many possibilities for interconnections among the relay nuclei. From the cochlear nucleus, axons project along three pathways:

the dorsal acoustic stria

the intermediate acoustic stria

the trapezoid body

this last pathway contains fibres that go to the superior olivary nuclei on both sides of the brain stem

the medial superior olive is concerned with sound localisation on the basis of interaural time differences – this nucleus is composed of spindle-shaped neurons with one medial and one lateral dendrite, which receive input from the contralateral and ipsilateral cochlear nuclei, respectively. The binaural cells in the medial superior olive are very sensitive to phase differences between continuous tones presented to the two ears

the lateral superior olive is concerned with interaural differences in sound intensity

"Axons arising from the superior olivary nuclei join the crossed and uncrossed axons from the cochlear nucleus to form the lateral lemniscus. Thus there is extensive bilateral auditory input in the CNS from the outset, so that lesions of the central auditory pathway do not cause monaural disability. The lateral lemniscus courses through the nuclei of the lateral lemniscus, where some fibres synapse. Here again there is extensive crossing between the two sides through Probst's commissure. All fibres in the lateral lemniscus eventually synapse in the inferior colliculus. The cells of the inferior colliculus receive binaural input and are arranged tonotopically. Most of the cells in the inferior colliculus send their axons to the medial geniculate body of the thalamus on the same side of the brain. The cells in the medial geniculate body send their axons to the ipsilateral primary auditory cortex in the superior temporal gyrus (Brodmann's areas 41 and 42).

 

"The primary auditory cortex contains several distinct tonotopic maps of the frequency spectrum, analogous to the multiple representations of the periphery in the somatic sensory and visual cortices. As elsewhere in cortex, layer IV is the input layer, layer V projects back to the medial geniculate nucleus, and layer VI projects back to the inferior colliculus.

organisation of the primate auditory cortex (Brugge):

functionally organised into columns – binaural cells are found clustered into two alternating columnar groups, summation columns and suppression columns, running from the pial surface to the underlying white matter

in summation columns, the response of a cell to binaural input is greater than to monaural input

in suppression columns, input from one ear is dominant; the response of a cell to input from the dominant ear is greater than to binaural input – columns of this kind may be related to spatial maps of sound localisation in the cortex

important callosal connections – zones that do and don't receive them are interspersed

extensive inputs from each ear in both hemispheres, so unilateral lesions of the auditory cortex do not dramatically disrupt the perception of sound frequency (though they do affect the ability to localise sounds slightly). Each hemisphere is concerned mainly with localising sounds on the contralateral side.

Broca's area and Wernicke's area are related to the perception of speech sounds.

Oddly enough, the neural machinery in echolocating bats uses many of the same sound cues known to be important for speech (Suga). The sounds emitted by echolocating bats have two principal components:

a constant frequency component similar to the formants in vowel sounds

a frequency modulated component similar to the changing frequencies in consonants

between the emitted sound and the echo, bats must distinguish 16 components to gauge the velocity and distance of their prey – this is qualitatively similar to the task of perceiving the subtleties in speech sounds

there is strong evidence that the bat's cerebral cortex includes areas where harmonic combinations of frequencies are represented – although no such cells have yet been found in the human brain, they would be ideal detectors of the multiple frequencies in the speech sounds produced by the human voice

"In addition to parallel pathways, the auditory system has an extensive set of feedback connections, e.g. some cells in the auditory cortex send their axons back to the medial geniculate nucleus and some back to the inferior colliculus. The inferior colliculus sends recurrent fibres to the cochlear nucleus. A cluster of cells near the superior olivary complex gives rise to the efferent olivocochlear bundle, which terminates either on the hair cells of the cochlea directly or on the afferent fibres innervating them. These connections may be important for regulating attention to particular sounds by modulating the transduction mechanism in the organ of Corti.

Notes - Kaas & Hackett, 1999, 'What' and 'where' processing in auditory cortex

they consider the idea that perception (in all three major sensory systems) divides neatly into what and where processing pathways

support comes from:

lesion studies in monkeys, in which impairment of spatial abilities or object identification can be separated

anatomical studies, showing that connections in the visual pathways are fairly strongly segregated

the auditory system too must analyse both identity and location of stimuli

the initial signals from the two cochleas are conveyed to a complex network of pathways and nuclei in the brainstem and thalamus, where spectral and temporal information is extracted to determine the identity and location (requiring binaural comparison in the brainstem) of the sound source

Romanski et al examined the connectivity of higher auditory cortical areas in macaque monkeys, using a combination of anatomical tracer dyes with electrophysiological recordings. Their results support the ventral/dorsal, temporal/parietal, what/where processing dichotomy, contributing to functionally distinct regions of the frontal lobe

primate auditory cortex has three similar primary or primary-like areas, each tonotopically organised and receiving activating inputs directly from the auditory thalamus

these project to 7 or 8 proposed fields - 3 are easy to get at with an electrode, and these seem to provide anatomical support for the beginnings of ventral and dorsal cortical processing streams - they project to largely different portions of the frontal lobe

however, the middle belt area makes connections to the frontal lobe that overlap those of the two putative streams, indicating possible intermediate or additional auditory streams (also analogous to the additional functional streams or 'streams within streams' found in the visual system)

Notes - Moore & King, 1999, Auditory perception: the near and far of sound localisation

Most experiments on auditory localisation have been concerned with horizontal and vertical positions of sound sources, ignoring the third dimension - distance. There has been little work on this since von Bekesy, until Bronkhorst and Houtgast's demonstration using virtual sound technology that "the perception of sound distance in an enclosed room by human listeners can be quite simply modelled by fitting a temporal window around the ratio of direct-to-reverberant sound energy"; and Graziano et al have shown that neurons in the frontal cortex of monkeys respond preferentially to sounds presented at particular near distances, within a hand's grasp of the monkey's head.

Distance cues in an enclosed space

"In addition to the classic cues of interaural time and level differences, sound localisation in the horizontal and vertical planes - 'direction perception' - is known to depend on spectral cues provided by the directional filtering of higher frequency sounds by the body, head and, particularly, outer ear. Sound direction perception usually works best in the 'free field' (or anechoic rooms). However, distance perception of an unfamiliar sound is not particularly good in the free field - at distances greater than the sound's longest wavelength - because it is largely determined by, and therefore confounded with, the level of the sound."
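A minimal sketch of why free-field distance is confounded with level, assuming simple inverse-square spreading (my assumption; the article doesn't spell out the law):

```python
import math

def level_change_db(d_near: float, d_far: float) -> float:
    """Change in SPL when a point source moves from d_near to d_far
    (metres), under inverse-square spreading in the free field."""
    return 20 * math.log10(d_near / d_far)

print(level_change_db(1.0, 2.0))    # ~ -6 dB per doubling of distance
print(level_change_db(1.0, 10.0))   # -20 dB at ten times the distance
# so a quiet nearby source and a loud distant one can be
# indistinguishable at the ear without other distance cues
```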

virtual space sounds - the cues provided by the head, the ears and the room are measured, digitally synthesised and mixed with the acoustic characteristics of the presenting headphones - indistinguishable from real sounds presented within rooms by distant loudspeakers

this has enabled the independent manipulation of various cues

Neural encoding of sound source distance

"Neurophysiological studies with a range of spcies have shown that neurons throughout the central auditory pathway tend to be tuned, to a greater or lesser extent, to the direction of a sound source, e.g. echolocation in bats.

The activity of the multi-sensory mammalian superior colliculus can be enhanced or suppressed when stimuli of different sensory modalities are presented in combination, especially depending on the relative timing of the signals. For visual-auditory neurons, the largest response enhancements are often observed when the auditory stimulus is delayed with respect to the visual stimulus, a consequence of the difference in the time course of the transduction mechanisms for these two sensory systems.

There is less evidence for distance tuning in the auditory system of non-echolocating animals. Graziano et al recorded from the monkey's ventral premotor cortex which, like the superior colliculus, is a multisensory area involved in the sensory guidance of movement. They have shown that the auditory receptive fields of ventral premotor cortex neurons, like their visual counterparts, rarely extend beyond 30cm from the head (and are therefore restricted to a region of space within the monkey's reach).

About 60% of these neurons responded more strongly to broadband sounds positioned 10cm from the head compared to those at distances of 30cm or 50cm.

These ventral premotor neurons had overlapping visual, auditory and tactile receptive fields - this seems to be for control of head and arm movements toward or away from stimuli in the vicinity of the animal's body

Basis for auditory distance representation in the cortex

"There is, however, a fundamental difference in the way that auditory distance was examined in the two studies. The virtual space stimuli used by Bronkhurst and Houtgast simulated source distances of a metre or more - the far field, and refers to the region of space within which both monaural and binaural cue values are essentially independent of distance. In contrast, the distances in the Graziano et al. study were within the near field, a more complex region within which energy circulates without propagating. Monaural spectral cues and interaural level differences associated with near-field sound sources therefore vary with distance, providing a possible basis for distance discrimination by both individual neurons and human listeners. This is obviously useful for localising nearby sounds, but doesn't establish whether auditory neurons in non-echolocating mammals are sensitive to the other cues available for more distant sound sources."

Notes – web, 'Locating a mouse by its sound'

Payne had shown that the owls make use of their superb scotopic vision to pounce on mice in very dim light. However, they are still able to catch the mouse even if the lights are turned off as the owl is leaving its perch, as long as the mouse makes some sort of noise. This ability is impaired if the owl's ears are plugged.

"Eric Knudsen and Mark Konishi took this behavior into the lab and developed a technique to assess the animal's ability to localize sound. They trained an owl to sit on a perch in a dark, sound proof, anechoic room and attend to a sound produced by a speaker that could be positioned anywhere about the owl. When an owl localizes a sound, it turns its head, toward the direction of the sound. The owl's localization of sound was monitored by recording its head position with a special device -- a simple detector mounted on the head recorded induced electric currents produced by a pair of electric coils mounted around the animal's head. The speaker was mounted on a track and its position was controlled by a computer."

Owls' ears are not symmetrical - the left one points down and the right one points up. "A partial block of the left ear causes the animal to miss the speaker location by orienting the head just a little to the right and up from the actual location", and vice versa. These errors were greater in elevation than in azimuth. Based on this result, and the anatomy of the two ears, it seems that the owl uses IID.

By feeding different signals into the two ears, Moiseff and Konishi showed that the owls were calculating the OTD for a particular location in the acoustic world since the animal looked in the expected direction.

"Intensity differences (elevation) via IID in Nucleus Angularis (to the ventral nucleus of the lateral lemniscus, VLVp) and time differences (Azimuth) via ITD in Nucleus Magnocellularis (to the Nucleus Laminaris"

"Takahashi, Moiseff and Konishi [in 1984] used a local anesthetic to show that binaural intensity differences giving rise to space-specific cells in the MLD require the normal activity of the nucleus Magnocellularis. The processing of OTD requires the nucleus Angularis. Manley, Koppl, Carr and Konishi [in 1988]) showed that IID from L/R Nucleus Angularis is processed in the VLVp region of the Nucleus of the Lateralis Lemniscus while OTD from L/R Nucleus Magnocellularis is processed in the Nucleus Laminaris"

Notes – web, 'A brain map in auditory space'

Konishi & Knudsen�s (1977) study with owls� sound localisation abilities was an attempt to answer the question, �Why do we have two ears?�, given that one is more or less sufficient for identifying sounds. Most hearing creatures can identify where sounds are coming from using auditory cues alone, but owls (especially barn owls) so excel at the task that they can hunt in the dark.

They used microelectrode recording while a remote-controlled speaker was moved around the owl's head, imitating sounds the owl would hear in the wild. They identified an area in the midbrain, where ten thousand 'space-specific' neurons would fire only when sounds were presented in particular location. These cells are topographically arrayed, such that aggregates of space-specific neurons pinpointed precise vertical and horizontal coordinates of the speaker, firing only when a tone was played at that location.

"Since sound is a mechanical pressure disturbance in the medium, it is inherently non-directional -- that is, PRESSURE is a Scalar (magnitude) quantity rather than a VECTOR quantity. So, a simple pressure detector (such as the moth ear) is unable to determine the direction of the sound source. "Clever design" however, can result in directional sensitivity. One solution is to use two ears and monitor interaural differences in the arriving sound at the two detectors (we do this).

For example, it is possible to determine a sound's source direction by comparing Interaural Intensity Differences (IID) if the structures separating the two ears shadow the sound sufficiently (or if the distance is large enough for there to be differential attenuation). This effect is wavelength dependent (high frequency sounds with shorter wavelengths are attenuated more and are shadowed more easily by the head).

In addition, if the ears are far enough apart, then the time of arrival of the sound at the two ears will differ and Interaural Time Differences (ITD) can be used (either Ongoing Time Disparities, OTD, or sound Onset Disparities, OD). The owl uses both IID and ITD (so do we)."

Notes � web, �Biology 480: notes on Sci Am article�

"A sound originating from directly in front of your head has an equal intensity in each ear and arrives at the same time at each ear.However if the sound is displace to the left or right, both the intensity and the time of arrival will be different at each ear. Consider the following experiment: a subject is asked to point to the approximate location in space of a sound delivered through earphones.When the sound is louder in the left earphone and arrives earlier than the sound in the right earphone the subject perceives that the sound is to the left of the midline

the location of the owl's head was measured using the search coil technique - when the head is moved, it causes a variation of the flow of current proportional to both the horizontal and vertical movement of the head.

They found that altering the timing between two sounds caused the owl to move its head in the horizontal plane, and that altering the intensity between sounds in the two ears caused the owl to move its head in the vertical plane

The owl's midbrain contains a map of sound location: neurons located near the midline of the nucleus are sensitive to sounds to the left of the animal's head, and neurons located near the lateral edge of the nucleus are sensitive to sounds in front of the animal's head. The neurons' RFs overlap, providing a continuous representation.

Cells in the auditory nerve that project to the cochlear nuclei are sensitive to both the frequency and intensity of sound. Each cell responds best to a limited frequency range and, as an ensemble, cells are sensitive to a broad range of sound frequencies.

Each cell in the auditory nerve also exhibits "phase locking" to a certain part of the sound wave. A particular cell will fire an action potential at almost exactly the same point in the sound wave for each successive cycle of the sound wave. This response pattern helps to mark the time of arrival of the sound at each ear

Each cell in the auditory nerve projects to two distinct brain nuclei: the nucleus magnocellularis and the nucleus angularis. Here the information about timing and intensity splits into two pathways: the timing pathway includes the nucleus magnocellularis and n. laminaris, and the intensity pathway includes the n. angularis and lateral lemniscal nuclei

Cells in the n. magnocellularis also exhibit phase locking to stimuli; however, they are insensitive to the intensity of the stimulus. In contrast, cells in the n. angularis do not phase lock, but change their firing frequency according to the intensity of the stimulus

Cells from the n. magnocellularis project to the n. laminaris. It is here that the computation of time differences takes place. The n. laminaris is a bilateral structure, and each nucleus receives input from each ear. The n. laminaris is the first place where inputs from the two ears are combined

In 1948, Jeffress proposed a model by which timing differences could be computed by the auditory system. He suggested that signals from each ear would vary in how rapidly they traveled to the same location in the brain. He proposed that a specific class of neurons (called coincidence detectors) would only fire when inputs from the two ears arrived simultaneously

For example: imagine a sound source located close to the left ear. The sound signals would reach the left ear sooner than the right. A coincidence detector fires only when the two signals arrive simultaneously, so the signals coming from the left ear need to be delayed in time to allow the signals from the right ear to catch up. Catherine Carr did an anatomical analysis of this circuit and demonstrated the anatomical basis for the delay of signals within the auditory circuit.

Each coincidence detector is sensitive to a specific time difference. Some will be stimulated when the sound is directly in front of the animal, whereas another coincidence detector will be stimulated when the sound is closer to the right ear
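A toy sketch of Jeffress's delay-line idea (all parameters illustrative): an array of coincidence detectors, each with its own internal delay, and the detector whose delay cancels the true ITD responds most strongly.

```python
import numpy as np

# Each detector receives the left-ear signal through an internal delay
# and compares it with the right-ear signal; coincidence is scored as
# correlation. The detector whose delay matches the external ITD wins.
FS = 100_000                        # sampling rate, Hz (illustrative)
true_itd = 300e-6                   # sound reaches the left ear 300 µs early

t = np.arange(0, 0.02, 1 / FS)      # 20 ms of a 500 Hz tone
left = np.sin(2 * np.pi * 500 * t)
right = np.sin(2 * np.pi * 500 * (t - true_itd))

candidate_itds = np.arange(-500e-6, 501e-6, 50e-6)   # the detector array
best_itd, best_score = None, -np.inf
for itd in candidate_itds:
    shift = int(round(itd * FS))
    delayed_left = np.roll(left, shift)     # this detector's internal delay
    score = np.dot(delayed_left, right)     # coincidence ~ correlation
    if score > best_score:
        best_itd, best_score = itd, score

print(f"estimated ITD: {best_itd * 1e6:.0f} µs")   # -> 300 µs
```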

Notes – web, 'Birder's world, Those amazing birds: seeing with ears'

"The facial disk, characteristic of all owls, is best developed in the Barn Owl. It consists of several layers of densely packed, stiff feathers located in circular rings. Each feather has a wide shaft and short barbs, and bends forward near the tip. In addition, two prominent grooves in the facial disk run from either side of the mouth up past the ear openings. These grooves are not unlike the grooves in our outer ear, which help funnel sounds into the ear. An essential feature of the facial-disk grooves is that the feathers lining them are designed to reflect high frequency sound waves into the ear. Consequently, the grooves in the facial disk selectively concentrate and direct to the ears such sounds as the squeaks of mice and rustle of leaves."

Questions

how can we tell elevation???

how does having the two ears pointing up and down help the owl???

how can we tell front and back???

are the spatial maps in the higher levels of the auditory cortex head-centred???

Kandel and Schwartz

what's the pinna – the outside bit???

'the largely cartilaginous projecting portion of the external ear'

'the scala vestibuli and scala tympani communicate with each other at the helicotrema where the scala media ends, so that perilymph is continuous'???

endolymph/perilymph

'Thus, the cochlear compartments convert the differential pressure between the scala vestibuli and scala tympani into oscillating movements of the basilar membrane that excite and inhibit the sensory transducing cells in the organ of Corti'???

apical – related to or situated at an apex

'parallel arrays of cross-bridge actin filaments'???

how quickly can the outer hair cells change their own length???

what does 'monaural disability' mean???

would it be fair to say that the auditory system does a lot more pre-cortical processing???

'pial surface'???

binaural interaction columns

Kaas & Hackett

is there evidence for a what/where division in the somatosensory system???

does it make sense to talk of faster processing by having 3 parallel primary areas, when simply lumping them together would not slow things down in a connectionist system, would it???

Moore & King

just how accurate is human sound localisation??? what's its adaptive value??? how far does it extend – reaching distance only???

with the 10, 30 and 50cm distance trials, did they increase/decrease the stimulus level so that the only variable in the cue was distance???

it sounds as though they didn't, but they had a control of increasing level but keeping the same distance, I think

where did they measure the centre of the azimuth from??? did they find complementary neurons that preferred distant stimuli???

ratio of direct-to-reverberant sound energy???

temporal integration window??? duration 6ms that agrees closely with other estimates of auditory temporal processing???

the 'acoustically-responsive neurons showed some tuning for sound azimuth'???

web, 'Locating a mouse by its sound'

ah, so unlike in the PPC, there are actually 'space' cells with different cells representing different head-centred coordinates??? is that any different to a head-centred frame???

MLD???

why should timing indicate azimuth alone, and intensity elevation (in the owl)???

web, 'A brain map of auditory space'

web, 'Biology 480: notes on Sci Am article'

web, 'Birder's world, Those amazing birds: seeing with ears'